Building Consumer Agency

Agentic AI has profound implications for every part of society. A world in which everyone has access to a personal AI tutor means big changes to education. A world in which everyone has access to a personal medical advisor means big changes to healthcare. And so on.

Given our work at Consumer Reports, I’m particularly interested in what agentic AI means for consumers and their power in the marketplace.

AI agents are poised to transform search, e-commerce, advertising, marketing, and customer care – in short, to rewire how consumers connect and transact in the marketplace.

While there’s been lots of attention and investment in AI agents for the enterprise, we’re most excited about the potential for “consumer-driven” AI agents that help people better navigate the marketplace.

Imagine employing an agent that understands the products you’re researching, gathers information, and helps you make the best purchase (see Deep Research for a glimpse of where this is headed). Or an agent that comprehends a customer service issue you’re facing, reasons through acceptable resolutions, and negotiates with businesses until you’re satisfied (squint, and you can see the early outlines of this in OpenAI’s Operator tool).

People need confidence that the agents they choose for these kinds of tasks are trustworthy and protect them from exploitation. Put another way: it will become increasingly important to distinguish between agents that work for you, and agents that work on you.

Accomplishing this requires a combination of legal, technical, and design guarantees. An AI agent must know very personal details about its user, deeply understand the user’s goals and preferences, and act in ways that are expected and explainable. An agent must not subvert a user’s agency, hold inappropriate sway over their emotions, or act illegally. These are hard problems with significant implications for the economy and society.

In the immediate term, there’s also significant uncertainty around how businesses should interoperate with these agents, given the privacy, security, reputational, and logistical challenges they pose.

But if we can solve some of these challenges, consumers will be able to pick and choose specialized agents that are most appropriate to the problems they’re trying to solve. And the upside is enormous: AI agents with a strong “duty of loyalty” could help us make better choices, effectively represent our interests, and transact more freely—with a fraction of the time and effort. Over time, that should lead to a healthier marketplace.

On the other hand: if we don’t create the right norms and standards, AI agents developed and controlled by corporate interests could work in ways that are subtly counter to a consumer’s interests. Instead of acting as impartial advocates, these agents could nudge users toward choices that benefit corporations—steering them toward higher-margin products, hidden fees, or disadvantageous contracts under the guise of “helpful” recommendations. The fine line between persuasion and manipulation could blur, and the very agents promising to “empower” consumers might instead make them easier to exploit.

Who will champion the needs of everyday people in this increasingly complex and highly reactive world? 

Here’s a thought experiment: what if everyone had access to a “personal consumer advocate” that understands their goals and values, works tirelessly, and looks out for them? Agents like these could serve individual consumers while also protecting consumers as a class—analyzing patterns across different consumer groups and organizing collective action.

I’m hopeful we’ll see investment from startups, businesses, and public interest tech to build this new kind of consumer agency in the coming years. This could be a powerful way to strengthen consumers’ power in the marketplace, especially if we can’t rely on regulators.

Finally: in our coming agentic world, informed choice will be as important as ever. There will be a lot of competition to build the most advanced and the most specialized agents, but builders of consumer-facing AI agents will also need to prove their systems’ loyalty to users—no one will trust an AI agent that doesn’t have their best interests at heart. Everyone will need to know whether their agents are trustworthy stewards of data, prioritize their interests over those of counterparties, and protect them from manipulation or exploitation.

This will be a crucial area for consumer protection research—and an opportunity to build creative policy, product and technical solutions. The ultimate prize is an internet that works better for consumers. If you’re reading this and have been thinking along these lines, please get in touch!
